    Discretized Multinomial Distributions and Nash Equilibria in Anonymous Games

    We show that there is a polynomial-time approximation scheme for computing Nash equilibria in anonymous games with any fixed number of strategies (a very broad and important class of games), extending the two-strategy result of Daskalakis and Papadimitriou (2007). The approximation guarantee follows from a probabilistic result of more general interest: the distribution of the sum of n independent unit vectors with values ranging over {e1, e2, ..., ek}, where ei is the unit vector along dimension i of k-dimensional Euclidean space, can be approximated by the distribution of the sum of another set of independent unit vectors whose probabilities of obtaining each value are multiples of 1/z for some integer z, such that the variational distance of the two distributions is at most eps, where eps is bounded by an inverse polynomial in z and a function of k, but with no dependence on n. Our probabilistic result specifies the construction of a surprisingly sparse eps-cover -- under the total variation distance -- of the set of distributions of sums of independent unit vectors, which is of interest in its own right.
    Comment: In the 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008
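
    As a concrete (hypothetical) illustration of the objects involved -- not the paper's construction -- the Python sketch below rounds each player's mixed strategy to a grid of multiples of 1/z and estimates, by Monte Carlo, the total variation distance between the two sums of independent unit vectors. Note that naive per-player rounding accumulates error with n; the point of the paper is a far cleverer cover whose error is independent of n.

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(0)

    def round_to_grid(p, z):
        """Round a probability vector to multiples of 1/z (largest-remainder method)."""
        scaled = p * z
        floors = np.floor(scaled).astype(int)
        remainder = int(z - floors.sum())            # units of 1/z left to hand out
        order = np.argsort(scaled - floors)[::-1]    # largest fractional parts first
        floors[order[:remainder]] += 1
        return floors / z

    def empirical_sum_dist(ps, samples=20000):
        """Empirical distribution of the sum of independent categorical unit vectors."""
        counts = Counter()
        for _ in range(samples):
            total = tuple(np.sum([rng.multinomial(1, p) for p in ps], axis=0))
            counts[total] += 1
        return {v: c / samples for v, c in counts.items()}

    n, k, z = 30, 3, 20                              # players, strategies, grid size
    ps = [rng.dirichlet(np.ones(k)) for _ in range(n)]
    qs = [round_to_grid(p, z) for p in ps]
    d1, d2 = empirical_sum_dist(ps), empirical_sum_dist(qs)
    tv = 0.5 * sum(abs(d1.get(x, 0) - d2.get(x, 0)) for x in set(d1) | set(d2))
    print(f"estimated total variation distance: {tv:.3f}")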

    Optimum Statistical Estimation with Strategic Data Sources

    We propose an optimum mechanism for providing monetary incentives to the data sources of a statistical estimator such as linear regression, so that high-quality data is provided at low cost, in the sense that the sum of payments and estimation error is minimized. The mechanism applies to a broad range of estimators, including linear and polynomial regression, kernel regression, and, under some additional assumptions, ridge regression. It also generalizes to several objectives, including minimizing estimation error subject to budget constraints. Besides our concrete results for regression problems, we contribute a mechanism design framework through which to design and analyze statistical estimators whose examples are supplied by workers who incur a cost for labeling them.
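
    A toy sketch of the trade-off being optimized (under an assumed cost model -- this is not the paper's mechanism): worker i exerting effort e_i is paid c_i * e_i and supplies a data point of variance 1/e_i, and the designer minimizes payments plus the variance of the pooled, inverse-variance-weighted estimate of a mean.

    import numpy as np

    def total_cost(efforts, unit_costs, base_var=1.0):
        """Payments to workers plus variance of the inverse-variance-weighted mean.

        Worker i is paid unit_costs[i] * efforts[i] and contributes a point of
        variance base_var / efforts[i]; the pooled estimator's variance is
        base_var / sum(efforts).
        """
        efforts = np.asarray(efforts, dtype=float)
        return np.dot(unit_costs, efforts) + base_var / efforts.sum()

    unit_costs = np.array([0.05, 0.10, 0.20])   # hypothetical per-unit-effort costs
    # In this toy model it is optimal to use only the cheapest worker, with
    # effort e* = sqrt(1 / c_min), from minimizing c*e + 1/e over e > 0.
    e_star = np.sqrt(1.0 / unit_costs.min())
    print(total_cost([e_star, 0.0, 0.0], unit_costs))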

    Incremental Recompilation of Knowledge

    Approximating a general formula from above and below by Horn formulas (its Horn envelope and Horn core, respectively) was proposed by Selman and Kautz (1991, 1996) as a form of ``knowledge compilation,'' supporting rapid approximate reasoning; on the negative side, this scheme is static in that it supports no updates, and has certain complexity drawbacks pointed out by Kavvadias, Papadimitriou and Sideri (1993). On the other hand, the many frameworks and schemes proposed in the literature for theory update and revision are plagued by serious complexity-theoretic impediments, even in the Horn case, as was pointed out by Eiter and Gottlob (1992), and is further demonstrated in the present paper. More fundamentally, these schemes are not inductive, in that they may lose in a single update any positive properties of the represented sets of formulas (small size, Horn structure, etc.). In this paper we propose a new scheme, incremental recompilation, which combines Horn approximation and model-based updates; this scheme is inductive and very efficient, free of the problems facing its constituents. A set of formulas is represented by an upper and a lower Horn approximation. To update, we replace the upper Horn formula by the Horn envelope of its minimum-change update, and similarly the lower one by the Horn core of its update; the key fact which enables this scheme is that Horn envelopes and cores are easy to compute when the underlying formula is the result of a minimum-change update of a Horn formula by a clause. We conjecture that efficient algorithms are possible for more complex updates.
    Comment: See http://www.jair.org/ for any accompanying files
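
    One standard fact behind such schemes: a Boolean function is Horn exactly when its set of models is closed under componentwise AND, and the models of the Horn envelope are the AND-closure of the original formula's models. A brute-force sketch over explicit model sets (illustrative only; practical algorithms work on formulas, not model lists):

    def horn_envelope_models(models):
        """Close a set of models (ints viewed as bit vectors) under bitwise AND."""
        closure = set(models)
        frontier = set(models)
        while frontier:
            new = {a & b for a in frontier for b in closure} - closure
            closure |= new
            frontier = new
        return closure

    # Models of (x1 or x2) over two variables, encoded as bits: 01, 10, 11.
    print(sorted(bin(m) for m in horn_envelope_models({0b01, 0b10, 0b11})))
    # Adds 0b00 = 0b01 & 0b10: the envelope of (x1 or x2) admits every
    # assignment, i.e. it is the empty (trivially true) Horn formula.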

    The Complexity of the Homotopy Method, Equilibrium Selection, and Lemke-Howson Solutions

    We show that the widely used homotopy method for solving fixpoint problems, as well as the Harsanyi-Selten equilibrium selection process for games, are PSPACE-complete to implement. Extending our result for the Harsanyi-Selten process, we show that several other homotopy-based algorithms for finding equilibria of games are also PSPACE-complete to implement. A further application of our techniques yields the result that it is PSPACE-complete to compute any of the equilibria that could be found via the classical Lemke-Howson algorithm, a complexity-theoretic strengthening of the result in [Savani and von Stengel]. These results show that our techniques can be widely applied, and suggest that the PSPACE-completeness of implementing homotopy methods is a general principle.
    Comment: 23 pages, 1 figure; to appear in the FOCS 2011 conference
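
    For readers unfamiliar with the method whose implementation complexity is settled here: a homotopy method deforms a trivially solved problem into the target fixpoint problem while tracking the solution. A minimal one-dimensional numerical sketch (toy example, unrelated to the Harsanyi-Selten process itself):

    import math

    def homotopy_fixpoint(f, x0, steps=100, iters=200):
        """Trace the solution of x = (1-t)*x0 + t*f(x) as t goes from 0 to 1."""
        x = x0
        for s in range(1, steps + 1):
            t = s / steps
            for _ in range(iters):            # inner fixed-point iteration,
                x = (1 - t) * x0 + t * f(x)   # warm-started from the previous t
        return x                              # at t = 1 this solves x = f(x)

    x = homotopy_fixpoint(math.cos, x0=0.0)   # cos is a contraction near [0, 1]
    print(x, abs(x - math.cos(x)))            # ~0.739085, tiny residual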

    First-order transition in small-world networks

    The small-world transition is a first-order transition at zero density $p$ of shortcuts, whereby the normalized shortest-path distance undergoes a discontinuity in the thermodynamic limit. On finite systems the apparent transition is shifted by $\Delta p \sim L^{-d}$. Equivalently, a ``persistence size'' $L^* \sim p^{-1/d}$ can be defined in connection with finite-size effects. Assuming $L^* \sim p^{-\tau}$, simple rescaling arguments imply that $\tau = 1/d$. We confirm this result by extensive numerical simulation in one to four dimensions, and argue that $\tau = 1/d$ implies that this transition is first-order.
    Comment: 4 pages, 3 figures, to appear in Europhysics Letters
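
    A quick numerical sketch of the crossover in d = 1 (arbitrary parameters, using networkx's Newman-Watts model, in which shortcuts are added at density p and no edges are removed): the normalized path length should start dropping once the ring size exceeds the persistence size $L^* \sim 1/p$.

    import networkx as nx

    def normalized_path_length(L, p, k=2, trials=5):
        """Mean shortest-path length at shortcut density p, normalized by p = 0."""
        ell_0 = nx.average_shortest_path_length(
            nx.newman_watts_strogatz_graph(L, k, 0.0))
        ell_p = sum(
            nx.average_shortest_path_length(
                nx.newman_watts_strogatz_graph(L, k, p, seed=t))
            for t in range(trials)) / trials
        return ell_p / ell_0

    p = 0.01                          # shortcut density, so L* ~ 1/p = 100
    for L in (50, 100, 200, 400):     # system sizes below and above L*
        print(L, round(normalized_path_length(L, p), 3))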